In this session we will present the key concept of Column Generation and illustrate how it can be applied to Vehicle Routing. We will also discuss different practical issues of Column Generation and pricing strategies such as Dynamic Programming and Constraint Programming. The second part of the class will address a practical routing problem arising in the field of homecare, where we will combine Column Generation and LNS.
Download the CG exercise sheet.
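To fix notation, here is a minimal, generic sketch of the decomposition behind Column Generation for routing (a set-partitioning view; the exact formulation used in the session and exercises may differ). The restricted master problem selects among a subset \Omega' of known feasible routes,

\min \sum_{r \in \Omega'} c_r \lambda_r \quad \text{s.t.} \quad \sum_{r \in \Omega'} a_{ir} \lambda_r = 1 \;\; \forall \text{ customer } i \;(\text{dual } \pi_i), \qquad \lambda_r \ge 0,

and the pricing subproblem looks for a new route of negative reduced cost \bar{c}_r = c_r - \sum_i a_{ir}\pi_i, typically a resource-constrained shortest path solved by Dynamic Programming or Constraint Programming; when no such route exists, the master solution is optimal over all feasible routes.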
View presentation
Louis-Martin Rousseau, Professor, Polytechnique Montréal
After completing his PhD in computer science and operations research at Université de Montréal, Louis-Martin Rousseau joined the Mathematics and Industrial Engineering Department at École Polytechnique de Montréal in 2003. Louis-Martin was one of the first researchers to investigate the hybridization of classical operations research methods and constraint programming, which comes from artificial intelligence. His current research focuses on transportation logistics, scheduling and resource optimization in healthcare.
Mini-CP challenge! The participants will be challenged on a given problem, both in terms of modeling and of implementing a solution in their own standalone tool.
View presentation
Pierre Schaus, Assistant Professor, UCLouvain
Pierre Schaus obtained his Ph.D. from UCLouvain in 2009. He spent five months at Brown University working with Pascal Van Hentenryck, then joined the Dynadec startup to work on Comet for two years, before working two more years at N-SIDE. He has been an Assistant Professor at UCLouvain since September 2012. His research team develops the www.oscarlib.org CP solver.
The goal of this tutorial is to give an overview of a key contribution of constraint programming to the domain of scheduling: efficient algorithms for reasoning about complex resources. I will first survey well-known algorithms for propagating disjunctive and cumulative renewable resource constraints. In particular, the timetabling, edge-finding and energetic reasoning algorithms will be analyzed in depth. Then, less standard types of resources will be introduced, such as cache and capacity resources, together with some algorithms to reason about them.
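To recall the flavour of one of these checks (a textbook statement, assuming a cumulative resource of capacity C and activities i with earliest start est_i, latest completion lct_i, duration p_i and demand c_i), energetic reasoning verifies, for intervals [t_1, t_2], that

\sum_i c_i \cdot \max\bigl(0,\ \min(p_i,\ t_2 - t_1,\ est_i + p_i - t_1,\ t_2 - (lct_i - p_i))\bigr) \;\le\; C\,(t_2 - t_1),

i.e., the minimal energy the activities must consume inside the interval cannot exceed the energy the resource offers there; a violation proves infeasibility, and tighter variants use the same quantities to adjust activity time bounds.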
View presentation
Emmanuel Hébrard, Researcher, LAAS-CNRS
Emmanuel Hébrard is a research fellow at the CNRS in Toulouse. His research in constraint programming has had many applications in scheduling, including in particular a contribution to scheduling the activities of the Philae lander on the comet Churyumov-Gerasimenko.
The exploitation of flexibility in energy production and consumption is essential to avoid costly reinforcements of the power system and maintain security of supply while increasing the penetration of renewable (and intermittent) sources of energy. Let us focus on the "demand" side, i.e., on actors which are mostly consumers of energy: manufacturing plants, water networks, commercial buildings (or even elements in a building such as an HVAC (Heating, Ventilation and Air Conditioning) system, an elevator or a pool of elevators), and aggregations of those, such as a district. These actors use energy for their own needs, i.e., manufacture and deliver products to their own customers, deliver drinkable water to their customers, provide a comfortable work place to the employees and visitors of the building, etc. They may also produce energy, either as part of the process they manage (e.g., cogeneration in an industrial plant, elevator braking energy recovery) or with energy production resources installed for cost reduction and security reasons. Finally, they may also have energy storage resources, which could have been installed to build flexibility or for any other reason. Part or all of the stored energy capacity can then be used to reduce operational costs.
Incentives to reduce or shift energy consumption in time must be considered with respect to the main objectives (and other cost factors) of the consuming organization. The very first thing to do is to characterize the available or potential sources of flexibility and how they could be used to make gains in terms of energy, revenues and cost savings, or environmental impact.
Considered separately, such sources of flexibility induce very different optimization models: multi-criteria regulation; scheduling with energy resource constraints and costs; energy flow optimization. Things get more complex when these sources of flexibility coexist and when multiple actors, each with its own constraints and objectives, share the same energy network. Practical optimization problems will be presented, using examples from two European projects (Arrowhead and Ambassador) and from the two PhD theses of Chloé Desdouits "Reduction of electricity consumption peaks and optimization problems induced on the demand side" and Peter Pflaum "Energy management strategies for smart grids".
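As a hedged illustration of just one of these model families (scheduling with energy costs, in a deliberately simplified, preemptive, discrete-time form with notation introduced here, not taken from the projects or theses): with time-step prices \pi_t, a site power limit P^{\max}, and tasks j requiring p_j steps of processing at power e_j, one may solve

\min \sum_t \pi_t \sum_j e_j x_{j,t} \quad \text{s.t.} \quad \sum_t x_{j,t} = p_j \;\forall j, \qquad \sum_j e_j x_{j,t} \le P^{\max} \;\forall t, \qquad x_{j,t} \in \{0,1\},

which shifts consumption towards cheap periods; real models add non-preemption, storage dynamics, local production and comfort or service-level constraints, which is where the coupling between actors becomes hard.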
View presentation
Claude Le Pape, VP Optimization and Analytics, Schneider Electric
Claude Le Pape is in charge, at Schneider Electric, of coordinating the evaluation of new technologies, the recognition of research and development experts, and the management of research and development projects and partnerships in the area of optimization and analytics. He received a PhD in Computer Science from University Paris XI and a Management Degree from "College des Ingénieurs" in 1988. From 1989 to 2007, he was successively a postdoctoral student at Stanford University, consultant and software developer at ILOG S.A., senior researcher at Bouygues S.A., and R&D team leader at Bouygues Telecom and ILOG S.A. He participated in several European research projects and in the development of many optimization software tools and industrial applications in various domains, including mixture design, inventory management, long-term personnel planning, construction site scheduling, and manufacturing scheduling.
Many users are disappointed when two runs of the same solver do not always return the same solution; this property is usually called the determinism of a solver or of an algorithm. Many users are also disappointed by the lack of consistency between the solution returned by a piece of software and the computing effort spent to find it: they are disappointed when a solution returned under a larger time limit is not as good as the one returned by the same software under a smaller time limit, or when a parallel algorithm using 8 cores is sometimes slower than the same algorithm using 4 cores. In most cases, users expect the quality of a solution to improve with the computing power devoted to finding it. In this talk we discuss an extension of determinism that we call the consistency of an algorithm. We will show what can be done with sequential or parallel search algorithms, taken as an abstraction of the search algorithms used to solve problems such as CSP, SAT or MIP.
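One informal way to state this expectation (my notation, not necessarily the definition used in the talk): for a minimization problem P and an algorithm A given a computing budget b (a time limit, a number of cores), users expect

b_1 \le b_2 \;\Longrightarrow\; f\bigl(A(P, b_1)\bigr) \ge f\bigl(A(P, b_2)\bigr),

i.e., the objective value of the returned solution should never deteriorate when the budget grows; determinism is the degenerate case where A(P, b) is a fixed function of P and b across runs.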
View presentation
Bertrand Le Cun, Software Engineer, Google Inc.
An assistant professor for almost 20 years at Nanterre University (France) and an associate researcher in the PRiSM laboratory of Versailles University, Bertrand Le Cun moved to Google France in late 2014. His research deals with the design of algorithms to solve combinatorial optimization problems, but also with the design of generic software for sequential and parallel environments. He has worked on problems such as the 2D-guillotine cutting stock problem, the quadratic assignment problem (2D and 3D), vehicle routing, crew scheduling, energy optimization and fragmentable bin packing. The methods used include A*, dynamic programming, branch-and-bound, branch-and-price and branch-and-price-and-cut, in both sequential and parallel environments (Grid/Cloud).
Pierre Flener, Professor, Uppsala University
Pierre Flener is a Professor of Computing Science at Uppsala University (Sweden), where he is the leader and founder of the ASTRA Research Group on Combinatorial Optimisation, which specialises in global constraints, constraint-based modelling, and constraint-based local search. He was elected to the Executive Committee of the Association for Constraint Programming (ACP) and was its conference coordinator from 2013 to 2016. He co-chaired CP 2013 and the ACP Summer School 2011.
David Coudert, Director of Research, INRIA
David Coudert is Research Director at Inria Sophia Antipolis – Méditerranée and the scientific leader of COATI, a joint project-team between Inria and the I3S (CNRS, UNS) laboratory. He obtained his Ph.D. and HDR in computer science from University Nice Sophia Antipolis in 2001 and 2010, respectively. His research interests include algorithmic graph theory, combinatorial optimization and operations research, with communication networks as the main application area (routing, fault tolerance, reliable design, etc.). He is particularly interested in decomposition and pre-processing methods that can be used to better understand the structure of graphs, and he uses this information in the design of algorithms. He collaborates with industrial partners such as Alcatel-Bell labs, Orange labs, 3Roam, Amadeus, Benomad and Instant-System. He contributes to the development of graph optimization libraries such as Grph and SageMath. In 2016, together with Nathann Cohen, he won the Flinders Hamiltonian Cycle Problem Challenge, an international competition on the famous Hamiltonian Cycle problem.
Private and public cloud data centers are platforms of choice to host a large scope of applications. Coupled with the use of Virtual Machines (VMs) to wrap the applications, they exhibit low operating costs thanks to workload consolidation. The VM scheduler is the cornerstone of a performant and profitable virtualized infrastructure: the provider bases its offering, and the clients base their requirements, on its features. Developing a VM scheduler is however a challenging task. Above all, the developer must master the theoretical foundations of the problem and their consequences for scalability. He must also provide an architecture that is flexible enough to support the evolving requirements of the users. The quality of a scheduler is thus largely a matter of a good match between its reasoning capabilities, the particularities of the infrastructure and the workload profile. It might then be vain to look for the perfect pre-packaged VM scheduler; it may be better to build one's own.
In this talk, I will discuss the open-source VM scheduler BtrPlace. BtrPlace allows third-party developers to enhance its reasoning capabilities and make it support new, possibly unique, concerns. The flexibility and the reasoning capabilities of BtrPlace are granted by the extensive use of Constraint Programming and the Choco solver. During the talk, I will present the VM placement problem, its model and its implementation. I will also review the design choices we have made over the last ten years of development and their consequences in terms of capabilities and performance. BtrPlace has been developed at the University of Nice Sophia Antipolis since 2011. It is a complete rewrite of Entropy, a project initiated in 2006 at the Mines de Nantes. Despite being initially a research prototype, BtrPlace is used in production because of its flexibility: Nutanix, a provider of enterprise clusters, uses it in thousands of private clouds worldwide to mitigate local load spikes.
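To give a concrete feel for the constraint-programming side, here is a deliberately tiny sketch of the assignment-plus-capacity core of VM placement, written against the Choco 4 API as I understand it; it is an illustration only, not BtrPlace's actual model, which also reasons about migrations, action scheduling and user-supplied side constraints.

// Minimal, hypothetical sketch (Choco 4 API assumed): place VMs on nodes
// without exceeding each node's memory capacity. Not BtrPlace's real model.
import org.chocosolver.solver.Model;
import org.chocosolver.solver.Solution;
import org.chocosolver.solver.variables.IntVar;

public class TinyVmPlacement {
    public static void main(String[] args) {
        int[] vmMem   = {2, 4, 8, 1};   // memory demand of each VM (GB)
        int[] nodeCap = {8, 8};         // memory capacity of each node (GB)

        Model model = new Model("vm-placement");
        // host[v] = index of the node that will run VM v
        IntVar[] host = model.intVarArray("host", vmMem.length, 0, nodeCap.length - 1);
        // load[n] = memory used on node n, bounded by its capacity
        IntVar[] load = new IntVar[nodeCap.length];
        for (int n = 0; n < nodeCap.length; n++) {
            load[n] = model.intVar("load_" + n, 0, nodeCap[n]);
        }
        // bin-packing links the host assignment to the per-node loads
        model.binPacking(host, vmMem, load, 0).post();

        Solution s = model.getSolver().findSolution();
        if (s != null) {
            for (int v = 0; v < vmMem.length; v++) {
                System.out.println("VM " + v + " -> node " + s.getIntVal(host[v]));
            }
        } else {
            System.out.println("No feasible placement");
        }
    }
}

A scheduler like BtrPlace exposes extension points so that additional constraints (for instance, spreading replicated VMs across distinct nodes) can be posted on the same decision variables, which is precisely the kind of flexibility the talk emphasises.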
View presentation
Fabien Hermenier, Software Engineer, Nutanix
Fabien Hermenier received his Ph.D. in 2009 from the University of Nantes. Since 2006, he has been working on resource management algorithms for IaaS infrastructures. His results are consolidated in BtrPlace, an open-source solution for virtual machine placement based on Constraint Programming. Fabien Hermenier was an associate professor at the University of Nice Sophia-Antipolis from 2011 to 2017. In 2017, he joined Nutanix to support the industrialisation of BtrPlace and to address new challenges related to resource management in enterprise clouds.